Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM). Traditional machine health maintenance systems are often costly, require substantial prior expertise, and are difficult to fit into highly complex and changing industrial scenarios. With the widespread deployment of sensors on industrial equipment, building the Industrial Internet of Things (IIoT) to interconnect these devices has become an inexorable trend in the development of the digital factory. By feeding a device's real-time operational data, collected via the IIoT, into an RUL prediction algorithm, a PHM system can plan proactive maintenance measures for the device, thereby reducing maintenance costs and decreasing failures during operation. This paper investigates RUL prediction models for multi-sensor devices in the IIoT scenario. We survey the mainstream RUL prediction models and summarize the basic steps of RUL prediction modeling in this scenario. On this basis, we propose a data-driven approach to RUL estimation. It employs a multi-head attention mechanism to fuse the multi-dimensional time-series data output by multiple sensors: attention over features captures the interactions between features, while attention over sequences learns the weights of time steps. A Long Short-Term Memory (LSTM) network is then applied to learn the temporal features. We evaluate the proposed model on two benchmark datasets (C-MAPSS and PHM08), and the results demonstrate that it outperforms state-of-the-art models. Moreover, through the interpretability of the multi-head attention mechanism, the proposed model can provide a preliminary explanation of engine degradation. This approach is therefore promising for predictive maintenance in IIoT scenarios.
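The attention-over-sequences idea above can be sketched as scaled dot-product attention that weights the time steps of a multi-sensor window before the recurrent stage. This is an illustrative, simplified sketch, not the paper's architecture; the `query` vector and the toy window are assumptions for demonstration.

```python
# Hypothetical sketch: attention over time steps of a multi-sensor window.
# The learned query is replaced by a fixed vector for illustration.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend_over_time(window, query):
    """window: list of T sensor-feature vectors; query: vector of same dim.
    Returns per-time-step attention weights and the weighted fusion."""
    d = len(query)
    scores = [sum(q * x for q, x in zip(query, step)) / math.sqrt(d)
              for step in window]
    weights = softmax(scores)
    fused = [sum(w * step[i] for w, step in zip(weights, window))
             for i in range(d)]
    return weights, fused

# Toy example: 3 time steps, 2 sensor features.
window = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, fused = attend_over_time(window, query=[1.0, 0.0])
```

The fused vector (one per attention head, in the full model) would then be fed to the LSTM to learn temporal dynamics.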
Data-free quantization can potentially address data privacy and security concerns in model compression and has therefore been widely studied. Recently, PSAQ-ViT designed a relative-value metric, patch similarity, to generate data for pre-trained Vision Transformers (ViTs), enabling the first attempt at data-free quantization for ViTs. In this paper, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs built on top of PSAQ-ViT. More specifically, following the patch-similarity metric in PSAQ-ViT, we introduce an adaptive teacher-student strategy that facilitates the continuous cyclic evolution of the generated samples and the quantized model (student) under the supervision of the full-precision model (teacher) in a competitive and interactive manner, thereby significantly improving the accuracy of the quantized model. Moreover, without auxiliary category guidance, we employ task- and model-independent prior information, making the general-purpose scheme compatible with a broad range of vision tasks and models. Extensive experiments are conducted on various models for image classification, object detection, and semantic segmentation tasks, and PSAQ-ViT V2, with a naive quantization strategy and without access to real-world data, consistently achieves competitive results, showing potential as a strong baseline for data-free quantization of ViTs. For example, with Swin-S as the (backbone) model, 8-bit quantization reaches 82.13 top-1 accuracy on ImageNet, 50.9 box AP and 44.1 mask AP on COCO, and 47.2 mIoU on ADE20K. We hope the accurate and general PSAQ-ViT V2 can serve as a potential and practical solution in real-world applications involving sensitive data. The code will be released and merged at: https://github.com/zkkli/psaq-vit.
Vision Transformers have recently achieved great success on various computer vision tasks; however, their high model complexity makes deployment on resource-constrained devices challenging. Quantization is an effective approach to reducing model complexity, and data-free quantization, which can address data privacy and security concerns during model deployment, has received wide interest. Unfortunately, all existing methods (such as BN regularization) were designed for convolutional neural networks and cannot be applied to Vision Transformers, whose model architectures are significantly different. In this paper, we propose PSAQ-ViT, a patch-similarity-based data-free quantization framework for Vision Transformers, which generates "realistic" samples based on the unique properties of Vision Transformers to calibrate the quantization parameters. Specifically, we analyze the properties of the self-attention module and reveal a general difference (patch similarity) in its processing of Gaussian noise versus real images. This insight guides us to design a relative-value metric for optimizing Gaussian noise to approximate real images, which are then used to calibrate the quantization parameters. Extensive experiments and ablation studies on various benchmarks validate the effectiveness of PSAQ-ViT, which can even outperform real-data-driven methods.
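The patch-similarity idea can be illustrated with a simple relative-value statistic: average pairwise cosine similarity across patch feature vectors, which tends to behave differently for structured images than for Gaussian noise. This is a hedged, minimal sketch of the concept only; the toy patch vectors are assumptions, not the paper's actual features or metric.

```python
# Illustrative sketch: mean pairwise cosine similarity over patch features,
# a stand-in for the relative patch-similarity statistic described above.
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def mean_patch_similarity(patches):
    """Average cosine similarity over all distinct patch pairs."""
    n = len(patches)
    sims = [cosine(patches[i], patches[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

# Structured patches (image-like) vs. incoherent patches (noise-like).
structured = mean_patch_similarity([[1.0, 0.0], [1.0, 0.0], [0.9, 0.1]])
noisy = mean_patch_similarity([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

In the framework described above, a statistic like this would be maximized while optimizing Gaussian-noise inputs so that the synthesized samples look "real" to the self-attention modules.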
Few-shot action recognition aims to recognize novel action classes (queries) using only a few samples (support). Most current methods follow the metric-learning paradigm, which learns to compare similarity between videos. Recently, it has been observed that directly measuring this similarity is not ideal, since different action instances may exhibit distinctive temporal distributions, leading to severe misalignment between query and support videos. In this paper, we tackle this problem from two distinct aspects: action duration misalignment and action evolution misalignment. We address them sequentially with a Two-stage Action Alignment Network (TA2N). The first stage locates the action by learning a temporal affine transform that warps the action duration of each video feature while suppressing action-irrelevant features (e.g., background). The second stage then coordinates the query feature to match the spatio-temporal action evolution of the support by performing temporal rearrangement and spatial offset prediction. Extensive experiments on benchmark datasets show the potential of the proposed method to achieve state-of-the-art performance for few-shot action recognition.
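The temporal affine transform in the first alignment stage can be sketched as resampling a feature sequence at positions t' = a·t + b with linear interpolation, so that the action's duration is stretched or compressed into alignment. This is a simplified 1-D illustration under stated assumptions (scalar features, clamped boundaries), not the network's learned module.

```python
# Hypothetical sketch: a 1-D temporal affine warp with linear interpolation.
def temporal_affine_warp(seq, a, b):
    """Resample seq at positions t' = a*t + b, clamped to sequence bounds."""
    T = len(seq)
    out = []
    for t in range(T):
        pos = min(max(a * t + b, 0.0), T - 1)
        lo = int(pos)
        hi = min(lo + 1, T - 1)
        frac = pos - lo
        out.append((1 - frac) * seq[lo] + frac * seq[hi])
    return out

seq = [0.0, 1.0, 2.0, 3.0]
identity = temporal_affine_warp(seq, 1.0, 0.0)   # unchanged
compressed = temporal_affine_warp(seq, 0.5, 0.0)  # action slowed to half speed
```

In the actual network the parameters (a, b) would be predicted per video, and the warp is applied to multi-channel feature maps rather than scalars.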
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For user features, we extracted the 20 user property features with the greatest information gain together with user tweet features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
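The benefit of introducing multiple relations can be sketched as relation-wise neighbor aggregation: average neighbor features within each relation, then average across relations. This is a toy, hypothetical illustration of multi-relational message passing, not MGTAB's evaluated models; the node names and features are invented.

```python
# Illustrative sketch: one round of multi-relational mean aggregation.
def multi_relation_aggregate(features, relations):
    """features: {node: [float, ...]}; relations: {rel: {node: [neighbors]}}.
    Per node: mean neighbor feature within each relation, then mean across
    relations; nodes with no neighbors keep their own features."""
    d = len(next(iter(features.values())))
    out = {}
    for node, feat in features.items():
        rel_means = []
        for adj in relations.values():
            nbrs = adj.get(node, [])
            if nbrs:
                rel_means.append([
                    sum(features[n][i] for n in nbrs) / len(nbrs)
                    for i in range(d)
                ])
        if rel_means:
            out[node] = [sum(m[i] for m in rel_means) / len(rel_means)
                         for i in range(d)]
        else:
            out[node] = list(feat)
    return out

feats = {"a": [1.0], "b": [3.0], "c": [5.0]}
rels = {"follow": {"a": ["b", "c"]}, "reply": {"a": ["b"]}}
agg = multi_relation_aggregate(feats, rels)
```

A real detector would stack such layers with learned per-relation weights (e.g., an R-GCN-style model) on top of the benchmark's 7 relation types.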
As one of the prevalent methods for achieving automation systems, Imitation Learning (IL) presents promising performance across a wide range of domains. However, despite considerable improvements in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
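The masking-and-return scheme above echoes RISE-style perturbation attribution: sample random binary masks over demonstration frames, score each mask by the retrained policy's environment return, and accumulate return-weighted votes per frame. The sketch below is a hedged simplification in which the expensive retrain-and-evaluate step is abstracted into a caller-supplied `evaluate` callback (an assumption for illustration).

```python
# Illustrative sketch: RISE-style frame importance from random masks.
import random

def importance_map(num_frames, num_masks, evaluate, keep_prob=0.5, seed=0):
    """evaluate(mask) -> float stands in for 'retrain the IL model on the
    kept frames and measure the environment return' (hypothetical callback).
    Each frame's importance is its return-weighted average over masks."""
    rng = random.Random(seed)
    scores = [0.0] * num_frames
    counts = [1e-9] * num_frames  # avoid division by zero
    for _ in range(num_masks):
        mask = [1 if rng.random() < keep_prob else 0
                for _ in range(num_frames)]
        ret = evaluate(mask)
        for i, kept in enumerate(mask):
            if kept:
                scores[i] += ret
                counts[i] += 1
    return [s / c for s, c in zip(scores, counts)]

# Toy sanity check: only frame 0 matters for the "return".
imp = importance_map(3, 500, lambda mask: float(mask[0]), seed=1)
```

In the real framework each `evaluate` call is a full retraining of the black-box IL model, which is why the number of masks is the dominant cost.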
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and higher consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is adapted to detect them. Based on these six types of PEAs, we propose a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM). Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
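The saliency-aware pooling idea can be sketched as weighting a per-pixel artifact-strength map by a visual-saliency map, so that artifacts in regions viewers attend to dominate the score. This is a minimal, assumed formulation for illustration; the actual SSTAM metric is more elaborate.

```python
# Illustrative sketch: saliency-weighted pooling of an artifact map.
def saliency_weighted_score(artifact_map, saliency_map):
    """Both maps are flattened to equal-length lists of floats in [0, 1].
    Returns the saliency-weighted mean artifact strength."""
    num = sum(a * s for a, s in zip(artifact_map, saliency_map))
    den = sum(saliency_map)
    return num / den if den else 0.0

# Same artifact, but only penalized where the viewer is likely to look.
salient_case = saliency_weighted_score([1.0, 0.0], [1.0, 0.0])
ignored_case = saliency_weighted_score([1.0, 0.0], [0.0, 1.0])
```

A full metric would compute one such map per PEA type (spatial and temporal) and combine the pooled scores.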
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one seeks only deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
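The return-risk objective, a weighted average of mean and percentile performance, can be illustrated on empirical samples: lam * mean + (1 - lam) * (alpha-percentile), where the percentile acts as a VaR-style tail statistic. This is a simplified empirical sketch, not the paper's Wasserstein reformulation; the index-based percentile convention is an assumption.

```python
# Illustrative sketch: empirical return-risk objective on sampled returns.
def return_risk(samples, alpha, lam):
    """lam * mean + (1 - lam) * empirical alpha-percentile (lower tail).
    Percentile via the order statistic at index floor(alpha * n)."""
    xs = sorted(samples)
    mean = sum(xs) / len(xs)
    idx = max(0, min(len(xs) - 1, int(alpha * len(xs))))
    percentile = xs[idx]
    return lam * mean + (1 - lam) * percentile

# lam = 1 recovers the mean criterion; lam = 0 the pure percentile criterion.
value = return_risk([0.0, 1.0, 2.0, 3.0], alpha=0.25, lam=0.5)
```

Smaller lam puts more weight on the low-return tail, yielding the more risk-averse policies the model is designed to trade off against average performance.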
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
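The photometric error used for self-supervision above can be sketched as the mean absolute intensity difference between the view synthesized from the predicted pose/depth and the actual target frame. This is a hedged, minimal stand-in (flattened grayscale lists, L1 loss); real pipelines typically combine L1 with SSIM and operate on full images.

```python
# Illustrative sketch: photometric reconstruction error for self-supervision.
def photometric_error(warped, target):
    """Mean absolute difference between the frame synthesized from predicted
    pose and depth ('warped') and the observed target frame."""
    assert len(warped) == len(target)
    return sum(abs(w - t) for w, t in zip(warped, target)) / len(warped)

perfect = photometric_error([1.0, 2.0], [1.0, 2.0])   # exact reconstruction
imperfect = photometric_error([0.0, 1.0], [1.0, 1.0])
```

Minimizing this error over unlabeled video is what lets the encoder learn ego-motion-aware features without any calibration or labels.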